Background and objective: Existing deep learning platforms for medical image segmentation mainly focus on fully supervised segmentation, which assumes that sufficient and accurate pixel-level annotations are available. We aim to develop a new deep learning toolkit to support annotation-efficient learning for medical image segmentation, which can accelerate and simplify the development of deep learning models under limited annotation budgets, e.g., learning from partial, sparse or noisy annotations. Methods: Our proposed toolkit, named PyMIC, is a modular deep learning platform for medical image segmentation tasks. In addition to basic components that support the development of high-performance models for fully supervised segmentation, it contains several advanced components tailored to learning from imperfect annotations, such as loading annotated and unannotated images, loss functions for unannotated, partially or inaccurately annotated images, and training procedures for co-learning between multiple networks. PyMIC is built on the PyTorch framework and supports semi-supervised, weakly supervised and noise-robust learning methods for medical image segmentation. Results: We present four illustrative medical image segmentation tasks based on PyMIC: (1) achieving competitive performance with fully supervised learning; (2) semi-supervised cardiac structure segmentation with only 10% of the training images annotated; (3) weakly supervised segmentation using scribble annotations; and (4) learning from noisy labels for chest radiograph segmentation. Conclusion: The PyMIC toolkit is easy to use and facilitates the efficient development of medical image segmentation models with imperfect annotations. It is modular and flexible, enabling researchers to develop high-performance models at low annotation cost. The source code is available at: https://github.com/hilab-git/pymic.
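As a minimal sketch of the kind of loss component such a toolkit needs for partially annotated images, the function below averages cross-entropy only over pixels that actually carry a label and ignores unannotated ones. The names, the NumPy formulation (rather than PyTorch), and the `-1` ignore convention are illustrative assumptions, not PyMIC's actual API:

```python
import numpy as np

def partial_cross_entropy(probs, labels, ignore_value=-1):
    """Cross-entropy averaged only over annotated pixels.

    probs:  (N, C) softmax probabilities per pixel.
    labels: (N,) integer class labels; ignore_value marks pixels
            without annotation (e.g. outside a scribble).
    """
    mask = labels != ignore_value
    if not mask.any():
        return 0.0
    picked = probs[mask, labels[mask]]       # probability of the true class
    return float(-np.log(picked + 1e-12).mean())

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
labels = np.array([0, 1, -1])                # third pixel is unannotated
loss = partial_cross_entropy(probs, labels)  # uses only the first two pixels
```

The unannotated pixel contributes no gradient, which is the basic mechanism behind learning from scribbles or partial labels.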
The success of Convolutional Neural Networks (CNNs) in 3D medical image segmentation relies on large amounts of fully annotated 3D volumes for training, which are time-consuming and labor-intensive to acquire. In this paper, we propose to annotate a segmentation target with only seven points in 3D medical images, and design a two-stage weakly supervised learning framework, PA-Seg. In the first stage, we employ the geodesic distance transform to expand the seed points so that they provide more supervision signal. To further deal with unannotated image regions during training, we propose two contextual regularization strategies, i.e., a multi-view Conditional Random Field (mCRF) loss and a Variance Minimization (VM) loss, where the first encourages pixels with similar features to have consistent labels, and the second minimizes the intensity variance of the segmented foreground and background, respectively. In the second stage, we use the predictions obtained by the model pre-trained in the first stage as pseudo labels. To overcome the noise in the pseudo labels, we introduce a Self and Cross Monitoring (SCM) strategy, which combines self-training with Cross Knowledge Distillation (CKD) between a primary model and an auxiliary model that learn from the soft labels generated by each other. Experiments on public datasets for Vestibular Schwannoma (VS) segmentation and Brain Tumor Segmentation (BraTS) showed that our model trained in the first stage outperforms existing state-of-the-art weakly supervised methods, and that after additional training with SCM, the model achieves competitive performance compared with its fully supervised counterpart on the BraTS dataset.
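The variance-minimization idea can be sketched as a soft, probability-weighted intensity variance over the predicted foreground and background: a clean split into homogeneous regions gives a low value. The function below is an illustrative NumPy simplification under assumed conventions, not the paper's exact loss:

```python
import numpy as np

def variance_minimization_loss(image, fg_prob):
    """Soft intra-region intensity variance: pixels softly assigned
    to the foreground (or background) should have similar intensities.
    image:   per-pixel intensities, flattened.
    fg_prob: predicted foreground probability per pixel.
    """
    loss = 0.0
    for w in (fg_prob, 1.0 - fg_prob):           # foreground, then background
        mean = (w * image).sum() / (w.sum() + 1e-12)   # weighted mean intensity
        loss += (w * (image - mean) ** 2).sum() / (w.sum() + 1e-12)
    return float(loss)

image = np.array([0.0, 0.1, 0.9, 1.0])
sharp = np.array([0.0, 0.0, 1.0, 1.0])   # clean split -> homogeneous regions
mixed = np.array([0.5, 0.5, 0.5, 0.5])   # ambiguous assignment -> high variance
```

Minimizing this term pushes the network toward segmentations whose regions are internally homogeneous, which regularizes the unannotated areas.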
Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction, and classification of myocardial pathology is key to this assessment. This work defines a new task of medical image analysis, i.e., myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, which was first proposed in the MyoPS challenge held in conjunction with MICCAI 2020. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works of the fifteen participants, and interpret their methods according to five aspects, i.e., preprocessing, data augmentation, learning strategy, model architecture and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles, explore the potential of solutions, and provide a benchmark for future research. We conclude that while promising results have been reported, the research is still in an early stage, and more in-depth exploration is needed before successful application in the clinic. Note that the MyoPS data and evaluation tools remain available upon registration via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/).
Reinforcement learning (RL) problems can be challenging without well-shaped rewards. Prior work on provably efficient RL methods generally proposes to address this issue with dedicated exploration strategies. However, another way to tackle this challenge is to reformulate it as a multi-task RL problem, where the task space contains not only the challenging task of interest but also easier tasks that implicitly function as a curriculum. Such a reformulation opens up the possibility of running existing multi-task RL methods as a more efficient alternative to solving a single challenging task from scratch. In this work, we provide a theoretical framework that reformulates a single-task RL problem as a multi-task RL problem defined by a curriculum. Under mild regularity conditions on the curriculum, we show that sequentially solving each task in the multi-task RL problem is more computationally efficient than solving the original single-task problem, without any explicit exploration bonuses or other exploration strategies. We also show that our theoretical insights can be translated into an effective practical learning algorithm that can accelerate curriculum learning on simulated robotic tasks.
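The sequential warm-starting that the framework analyzes can be illustrated with a toy 1-D optimization curriculum. This sketch only shows the mechanism of initializing each task from the previous task's solution; the tasks, targets, and step counts are invented, and no claim is made here about the efficiency result itself (which concerns exploration in RL, not gradient descent):

```python
def solve(f_grad, x0, lr=0.1, tol=1e-3, max_steps=10000):
    """Toy 'task': minimize a 1-D objective by gradient descent,
    counting steps until the gradient magnitude is below tol."""
    x = x0
    for step in range(1, max_steps + 1):
        g = f_grad(x)
        if abs(g) < tol:
            return x, step
        x -= lr * g
    return x, max_steps

# A curriculum of quadratics whose minima move gradually toward the
# hard task's minimum at x = 10 (targets are purely illustrative).
targets = [2.0, 5.0, 8.0, 10.0]

# Sequentially solve each task, warm-starting from the previous solution.
x, total = 0.0, 0
for t in targets:
    x, steps = solve(lambda x, t=t: 2 * (x - t), x)
    total += steps

# For comparison: solving the hard task directly from scratch.
_, direct = solve(lambda x: 2 * (x - 10.0), 0.0)
```

Each intermediate task hands the next one a nearby starting point, which is the structural property the theoretical framework exploits.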
In recent years, large amounts of effort have been put into pushing forward the real-world application of dynamic digital human (DDH). However, most current quality assessment research focuses on evaluating static 3D models and usually ignores motion distortions. Therefore, in this paper, we construct a large-scale dynamic digital human quality assessment (DDH-QA) database with diverse motion content as well as multiple distortions to comprehensively study the perceptual quality of DDHs. Both model-based distortion (noise, compression) and motion-based distortion (binding error, motion unnaturalness) are taken into consideration. Ten types of common motion are employed to drive the DDHs and a total of 800 DDHs are generated in the end. Afterward, we render the video sequences of the distorted DDHs as the evaluation media and carry out a well-controlled subjective experiment. Then a benchmark experiment is conducted with the state-of-the-art video quality assessment (VQA) methods and the experimental results show that existing VQA methods are limited in assessing the perceptual loss of DDHs. The database will be made publicly available to facilitate future research.
Data-Free Class Incremental Learning (DFCIL) aims to sequentially learn tasks with access only to data from the current one. DFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning. In this work, we introduce robust saliency guidance for DFCIL and propose a new framework, which we call RObust Saliency Supervision (ROSS), for mitigating the negative effect of saliency drift. Firstly, we use a teacher-student architecture leveraging low-level tasks to supervise the model with global saliency. We also apply boundary-guided saliency to protect it from drifting across object boundaries at intermediate layers. Finally, we introduce a module for injecting and recovering saliency noise to increase robustness of saliency preservation. Our experiments demonstrate that our method can retain better saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100, Tiny-ImageNet and ImageNet-Subset DFCIL benchmarks. Code will be made publicly available.
Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate that simply randomizing the patch size at training time leads to a single set of weights that performs well across a wide range of patch sizes, making it possible to tailor the model to different compute budgets at deployment time. We extensively evaluate the resulting model, which we call FlexiViT, on a wide range of tasks, including classification, image-text retrieval, open-world detection, panoptic segmentation, and semantic segmentation, concluding that it usually matches, and sometimes outperforms, standard ViT models trained at a single patch size in an otherwise identical setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that makes it easy to add compute-adaptive capabilities to most models relying on a ViT backbone architecture. Code and pre-trained models are available at https://github.com/google-research/big_vision
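The core training trick, sampling a patch size each step so that one set of weights sees many sequence lengths, can be sketched with plain NumPy patch extraction. The candidate sizes and the loop below are illustrative, not the paper's training code (FlexiViT also resizes the patch-embedding weights, which is omitted here):

```python
import numpy as np

def patchify(image, patch_size):
    """Split a square image (H, W, C) into non-overlapping patches of
    side patch_size; returns (num_patches, patch_size*patch_size*C)."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    gh, gw = h // patch_size, w // patch_size
    patches = image.reshape(gh, patch_size, gw, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4)   # (gh, gw, p, p, c)
    return patches.reshape(gh * gw, patch_size * patch_size * c)

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32, 3))

# Sample a patch size per training step: smaller patches give longer
# token sequences (higher cost), larger patches give shorter ones.
for _ in range(3):
    p = int(rng.choice([4, 8, 16]))
    tokens = patchify(image, p)   # sequence length is (32 // p) ** 2
```

At deployment time the same weights can then be run at whichever patch size fits the compute budget.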
The recurrent structure is a prevalent framework for the task of video super-resolution, which models the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to restored frames. In this circumstance, our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on the observations, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first adopt various cheap filters to produce a hidden state pool. For example, Gaussian blur filters are for smoothing artifacts while sharpening filters are for enhancing details. To aggregate a new hidden state that contains fewer artifacts from the hidden state pool, we devise a Selective Cross Attention (SCA) module, in which the attention between input features and each hidden state is calculated. Equipped with HSA, our proposed method, namely FastRealVSR, is able to achieve 2x speedup while obtaining better performance than Real-BasicVSR. Codes will be available at https://github.com/TencentARC/FastRealVSR
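A simplified sketch of the selective-cross-attention idea: attend from the input feature to each filtered candidate in the hidden-state pool and mix the candidates by the resulting weights. Flat vectors stand in for feature maps, and all names are hypothetical rather than the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_hidden_state(feat, pool):
    """Aggregate a cleaner hidden state from a pool of filtered
    candidates (e.g. blurred or sharpened versions): scaled dot-product
    attention between the input feature and each candidate gives the
    mixing weights."""
    scores = np.array([(feat * h).sum() / np.sqrt(feat.size) for h in pool])
    weights = softmax(scores)
    mixed = sum(w * h for w, h in zip(weights, pool))
    return mixed, weights

feat = np.ones(4)                       # current input feature
pool = [np.ones(4), np.zeros(4)]        # candidate hidden states
mixed, weights = select_hidden_state(feat, pool)
```

The candidate most compatible with the current input dominates the mixture, which is how artifacts carried by a corrupted hidden state can be down-weighted.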
Semantic segmentation usually benefits from global contexts, fine localisation information, multi-scale features, etc. To advance Transformer-based segmenters with these aspects, we present a simple yet powerful semantic segmentation architecture, termed IncepFormer. IncepFormer makes two critical contributions, as follows. First, it introduces a novel pyramid-structured Transformer encoder which harvests global context and fine localisation features simultaneously. These features are concatenated and fed into a convolution layer for final per-pixel prediction. Second, IncepFormer integrates an Inception-like architecture with depth-wise convolutions, and a light-weight feed-forward module in each self-attention layer, efficiently obtaining rich local multi-scale object features. Extensive experiments on five benchmarks show that our IncepFormer is superior to state-of-the-art methods in both accuracy and speed, e.g., 1) our IncepFormer-S achieves 47.7% mIoU on ADE20K, outperforming the existing best method by 1% while using only half the parameters and fewer FLOPs; 2) our IncepFormer-B achieves 82.0% mIoU on the Cityscapes dataset with 39.6M parameters. Code is available: github.com/shendu0321/IncepFormer.
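The depth-wise convolution used for cheap local feature extraction filters each channel with its own kernel, with no cross-channel mixing; this is what keeps the multi-scale branches light. A minimal NumPy version, assuming valid padding and square kernels, purely for illustration:

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depth-wise convolution with 'valid' padding.

    x:       input feature map of shape (H, W, C).
    kernels: one k x k kernel per channel, shape (k, k, C);
             each channel is filtered independently.
    """
    h, w, c = x.shape
    k = kernels.shape[0]
    oh, ow = h - k + 1, w - k + 1
    out = np.zeros((oh, ow, c))
    for i in range(oh):
        for j in range(ow):
            # Per-channel window product, summed spatially only.
            out[i, j] = (x[i:i+k, j:j+k, :] * kernels).sum(axis=(0, 1))
    return out

x = np.arange(32, dtype=float).reshape(4, 4, 2)
kernels = np.zeros((3, 3, 2))
kernels[1, 1, :] = 1.0                 # identity kernel per channel
out = depthwise_conv2d(x, kernels)     # crops to the 2x2x2 interior
```

Compared with a standard convolution's k*k*C_in*C_out weights, the depth-wise layer needs only k*k*C, which is why Inception-style multi-branch designs favor it.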
A common scenario of Multilingual Neural Machine Translation (MNMT) is that each translation task arrives in a sequential manner, and the training data of previous tasks is unavailable. In this scenario, current methods suffer heavily from catastrophic forgetting (CF). To alleviate CF, we investigate knowledge-distillation-based life-long learning methods. Specifically, in the one-to-many scenario, we propose a multilingual distillation method to make the new model (student) jointly learn the multilingual output from the old model (teacher) and the new task. In the many-to-one scenario, we find that direct distillation faces the extreme partial distillation problem, and we propose two different methods to address it: pseudo input distillation and reverse teacher distillation. The experimental results on twelve translation tasks show that the proposed methods can better consolidate previous knowledge and sharply alleviate CF.
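The distillation objective such methods build on is the KL divergence between temperature-softened teacher and student distributions over the vocabulary at each output position. The following is a minimal NumPy sketch of standard knowledge distillation, not of the paper's specific multilingual variants:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    the standard KD objective; logits are for one output token."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * np.log(p / (q + 1e-12) + 1e-12)).sum())
```

The student matches the teacher's soft output on old-task inputs while training on the new task, which is how the old translation directions are preserved without their original data.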